
Conversation

chtyler
Contributor

@chtyler chtyler commented Aug 7, 2025

…e - modified distribution image name in example yaml and modified prereq

Description

The LlamaStackDistribution example has been updated to change the location of the distribution image. In addition, content that is now invalid has been removed. Finally, a prerequisite item has been revised to more accurately reflect the task that must be completed beforehand.

How Has This Been Tested?

Merge criteria:

  • The commits are squashed in a cohesive manner and have meaningful messages.
  • Testing instructions have been added in the PR body (for PRs involving changes that are not immediately obvious).
  • The developer has manually tested the changes and verified that they work.

Summary by CodeRabbit

  • Documentation
    • Changed prerequisite wording to indicate activation of the Llama Stack Operator.
    • Switched image guidance to use the internal alias "rh-dev", with a note explaining that the Operator automatically resolves it to registry images.
    • Updated example configuration to remove storage block and reflect the new image approach.
    • Added explicit OpenShift CLI login steps and instructions to create an inference-model secret.
    • Added conditional guidance for disconnected/mirrored-image environments and clarified integration with a vLLM-served Llama 3.2 model.
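
As an illustration of the secret described in the summary, a minimal sketch follows. The secret name and environment-variable keys are taken from this PR's change summary; the namespace and all values are placeholders, not content from the PR:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: llama-stack-inference-model-secret
  namespace: my-llama-project            # placeholder namespace
type: Opaque
stringData:
  INFERENCE_MODEL: llama-3-2-3b-instruct          # model ID served by vLLM (placeholder)
  VLLM_URL: https://llama-route.example.com/v1    # vLLM inference endpoint (placeholder)
  VLLM_TLS_VERIFY: "true"
  VLLM_API_TOKEN: example-token                   # placeholder token
```

Using `stringData` (rather than base64-encoded `data`) keeps the example readable; Kubernetes encodes the values on write.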


coderabbitai bot commented Aug 7, 2025

Walkthrough

Updated documentation: changed prerequisite wording for the Llama Stack Operator and revised LlamaStackDistribution guidance to use an internal image reference (rh-dev) instead of explicit distribution.image and removed the storage block; added OpenShift CLI steps and a secret creation example; introduced conditional blocks for disconnected environments.

Changes

Cohort / File(s) Change Summary
Llama model with KServe doc
modules/deploying-a-llama-model-with-kserve.adoc
Wording change: prerequisite updated from "installed the Llama Stack Operator" to "activated the Llama Stack Operator."
LlamaStackDistribution doc
modules/deploying-a-llamastackdistribution-instance.adoc
Replaced explicit spec.server.distribution.image with spec.server.distribution.name: rh-dev; removed distribution.image and storage blocks; added NOTE explaining rh-dev is an internal reference resolved by the Operator; added oc login steps and a secret creation YAML for inference model env vars (llama-stack-inference-model-secret with INFERENCE_MODEL, VLLM_URL, VLLM_TLS_VERIFY, VLLM_API_TOKEN); introduced ifdef self-managed[] and ifdef::disconnected[] conditional guidance; formatting adjustments (removed blank line before Procedure).
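To make the change above concrete, a minimal `LlamaStackDistribution` CR might look like the following sketch. Only `spec.server.distribution.name: rh-dev` is taken from the PR; the `apiVersion`, metadata, and `containerSpec` values are assumptions based on the upstream llama-stack-k8s-operator and may differ in your release:

```yaml
apiVersion: llamastack.io/v1alpha1        # assumed CRD group/version
kind: LlamaStackDistribution
metadata:
  name: example-llamastack                # placeholder name
spec:
  replicas: 1
  server:
    distribution:
      name: rh-dev                        # Operator-managed alias from this PR
    containerSpec:
      port: 8321                          # assumed default Llama Stack server port
      envFrom:
        - secretRef:
            name: llama-stack-inference-model-secret   # secret added by this PR
```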

Estimated code review effort

🎯 2 (Simple) | ⏱️ ~8 minutes



@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 1

🧹 Nitpick comments (1)
modules/deploying-a-llama-model-with-kserve.adoc (1)

13-16: Anchor/terminology mismatch between “activated” wording and linked section

The prerequisite now says activated the Llama Stack Operator, but the link that follows still points to #installing-the-llama-stack-operator_rag.
Consider renaming the target anchor (and page heading) to activating for consistency, or adjust the prose here back to installed to avoid sending readers to a differently-worded section.

No functional impact, but improving this avoids cognitive friction.

📜 Review details

Configuration used: .coderabbit.yaml
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 46e0e7e and be10050.

📒 Files selected for processing (2)
  • modules/deploying-a-llama-model-with-kserve.adoc (1 hunks)
  • modules/deploying-a-llamastackdistribution-instance.adoc (1 hunks)

@chtyler chtyler force-pushed the RHAIENG-306-modify-docs-on-deploying-llamastackdistrubution-instance branch 2 times, most recently from 8d5e418 to 1d463aa on August 8, 2025 at 11:37
@chtyler chtyler force-pushed the RHAIENG-306-modify-docs-on-deploying-llamastackdistrubution-instance branch from c8c9509 to ef570e8 on August 11, 2025 at 12:44

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 0

♻️ Duplicate comments (1)
modules/deploying-a-llamastackdistribution-instance.adoc (1)

114-114: Using distribution.name is correct and resolves the earlier placeholder-image concern

Specifying spec.server.distribution.name: rh-dev (instead of distribution.image) aligns with the operator-managed resolution approach discussed in this PR and addresses the “invalid image reference” feedback from earlier reviews.

Please confirm that the documented Operator version in this release supports resolving the rh-dev alias (so readers on that version won’t see reconciliation failures). If there are version constraints, consider noting them briefly in the surrounding text.

🧹 Nitpick comments (2)
modules/deploying-a-llamastackdistribution-instance.adoc (2)

7-7: Tighten abstract phrasing; keep key directive about distribution.name

Current sentence is repetitive (“You can… You can…”). Consider a concise, active rewrite.

-You can integrate LlamaStack and its retrieval-augmented generation (RAG) capabilities with your deployed Llama 3.2 model served by vLLM. You can use this integration to build intelligent applications that combine large language models (LLMs) with real-time data retrieval, providing more accurate and contextually relevant responses for your AI workloads. When you create a `LlamaStackDistribution` custom resource (CR), specify `rh-dev` in the `spec.server.distribution.name` field. 
+Integrate LlamaStack’s retrieval-augmented generation (RAG) with a vLLM‑served Llama 3.2 model to build applications that combine LLMs with real‑time data retrieval for more accurate, contextually relevant responses. When creating a `LlamaStackDistribution` custom resource (CR), set `spec.server.distribution.name` to `rh-dev`.

117-120: Clarify what ‘rh-dev’ is and add guidance for pinning and disconnected clusters

Good note. Minor tightening and adding two reader‑helpful points: it’s an Operator‑managed alias, pinning implications, and a reminder about mirroring for disconnected setups (you already link to mirroring guidance above).

-[NOTE]
-====
-The `rh-dev` value is an internal image reference. When you create the `LlamaStackDistribution` custom resource, the {productname-short} Operator automatically resolves `rh-dev` to the container image in the appropriate registry. This internal image reference allows the underlying image to update without requiring changes to your custom resource.
-====
+[NOTE]
+====
+`rh-dev` is an Operator‑managed distribution alias. When you apply the `LlamaStackDistribution` custom resource, the {productname-short} Operator resolves this alias to the correct registry image.
+
+This indirection lets the Operator deliver image updates without modifying your custom resource. If you require a pinned version for reproducibility, follow the Operator’s guidance for selecting a fixed release/channel.
+
+For disconnected clusters, ensure the resolved image is mirrored to your internal registry and that image source policies are configured, as described in the mirroring documentation above.
+====
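
For the disconnected-cluster point in that suggested note, mirroring in OpenShift 4.13+ is typically declared with an ImageDigestMirrorSet. The sketch below is illustrative only: both registry paths are placeholders, and the actual image that `rh-dev` resolves to is not stated in this PR:

```yaml
apiVersion: config.openshift.io/v1
kind: ImageDigestMirrorSet
metadata:
  name: llama-stack-mirror            # placeholder name
spec:
  imageDigestMirrors:
    - mirrors:
        - mirror-registry.example.com/rhoai   # internal mirror registry (placeholder)
      source: registry.redhat.io/rhoai        # upstream source repository (placeholder)
```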
📜 Review details

Configuration used: .coderabbit.yaml
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between c8c9509 and ef570e8.

📒 Files selected for processing (2)
  • modules/deploying-a-llama-model-with-kserve.adoc (1 hunks)
  • modules/deploying-a-llamastackdistribution-instance.adoc (2 hunks)
🚧 Files skipped from review as they are similar to previous changes (1)
  • modules/deploying-a-llama-model-with-kserve.adoc

Contributor

@smccarthy-ie smccarthy-ie left a comment


LGTM!

@chtyler chtyler merged commit e0536d3 into opendatahub-io:main Aug 11, 2025
1 check passed